90 research outputs found

    MiR-185-5p regulates the development of myocardial fibrosis

    Background: Cardiac fibrosis stiffens the ventricular wall, predisposes to cardiac arrhythmias and contributes to the development of heart failure. In the present study, our aim was to identify novel miRNAs that regulate the development of cardiac fibrosis and could serve as potential therapeutic targets for myocardial fibrosis. Methods and results: Analysis of cardiac samples from sudden cardiac death victims with extensive myocardial fibrosis as the primary cause of death identified dysregulation of miR-185-5p. Analysis of resident cardiac cells from mice subjected to an experimental cardiac fibrosis model showed induction of miR-185-5p expression specifically in cardiac fibroblasts. In vitro, augmenting miR-185-5p induced collagen production and profibrotic activation in cardiac fibroblasts, whereas inhibition of miR-185-5p attenuated collagen production. In vivo, targeting miR-185-5p in mice abolished pressure overload-induced cardiac interstitial fibrosis. Mechanistically, miR-185-5p targets the apelin receptor and inhibits the anti-fibrotic effects of apelin. Finally, analysis of left ventricular tissue from patients with severe cardiomyopathy showed an increase in miR-185-5p expression together with pro-fibrotic TGF-beta 1 and collagen I. Conclusions: Our data show that miR-185-5p targets the apelin receptor and promotes myocardial fibrosis.

    Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)

    A multiscale framework for affine invariant pattern recognition and registration

    This thesis presents a multiscale framework for the construction of affine invariant pattern recognition and registration methods. The idea in the introduced approach is to extend the given pattern to a set of affine covariant versions, each carrying slightly different information, and then to apply known affine invariants to each of them separately. The key part of the framework is the construction of the affine covariant set, and this is done by combining several scaled representations of the original pattern. The advantages compared to previous approaches include the possibility of many variations and the inclusion of spatial information on the patterns in the features. The application of the multiscale framework is demonstrated by constructing several new affine invariant methods using different preprocessing techniques, combination schemes, and final recognition and registration approaches. The techniques introduced are briefly described from the perspective of the multiscale framework, and further treatment and properties are presented in the corresponding original publications. The theoretical discussion is supported by several experiments where the new methods are compared to existing approaches. In this thesis the patterns are assumed to be grayscale images, since this is the main application where affine relations arise. Nevertheless, multiscale methods can also be applied to other kinds of patterns where an affine relation is present. An additional application of one multiscale-based technique in convexity measurements is introduced. The method, called multiscale autoconvolution, can be used to build a convexity measure which is a descriptor of object shape. The proposed measure has two special features compared to existing approaches. It can be applied directly to grayscale images approximating binary objects, and it can be easily modified to produce a number of measures. The new measure is shown to be straightforward to evaluate for a given shape, and it performs well in the applications, as demonstrated by the experiments in the original paper.
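
    The general recipe described above can be illustrated with a minimal sketch: compute a known affine moment invariant (here the first Flusser-Suk invariant) on several scaled representations of a grayscale pattern and stack the values into one descriptor. This is only an illustration of the framework's idea, not the thesis' implementation; Gaussian smoothing is used as a stand-in for the scaled representations, and the function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def affine_moment_invariant(img):
    """First affine moment invariant I1 = (mu20*mu02 - mu11^2) / mu00^4."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx, cy = (img * xs).sum() / m00, (img * ys).sum() / m00
    mu20 = (img * (xs - cx) ** 2).sum()
    mu02 = (img * (ys - cy) ** 2).sum()
    mu11 = (img * (xs - cx) * (ys - cy)).sum()
    return (mu20 * mu02 - mu11 ** 2) / m00 ** 4

def multiscale_descriptor(img, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """Apply the invariant to several smoothed versions and stack the results.
    (A simplification: the smoothed set is not strictly affine covariant.)"""
    return np.array([affine_moment_invariant(gaussian_filter(img, s)) for s in sigmas])
```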

    It's in the bag: stronger supervision for automated face labelling

    The objective of this work is automatic labelling of characters in TV video and movies, given weak supervisory information provided by an aligned transcript. We make four contributions: (i) a new strategy for obtaining stronger supervisory information from aligned transcripts; (ii) an explicit model for classifying background characters, based on their face-tracks; and (iii) employing new ConvNet-based face features. Each of these contributions delivers a significant boost in performance, and we demonstrate this on standard benchmarks using tracks provided by authors of prior work. Finally, (iv) we also investigate the generalization and strength of the features and classifiers by applying them “in the raw” on new episodes where no supervisory information is used. Overall we achieve a dramatic improvement over the state of the art on both TV series and film datasets, almost saturating performance on some benchmarks.
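
    A common way to realise the track-level classification sketched above is to average the per-frame ConvNet face descriptors within each track and train a multi-class classifier on the weakly labelled tracks. The snippet below is a hedged illustration under that assumption, not the authors' pipeline; feature extraction is assumed to have happened already, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def track_feature(frame_embeddings):
    """Average per-frame ConvNet face descriptors into one L2-normalised track vector."""
    v = np.asarray(frame_embeddings, dtype=np.float64).mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def train_character_classifier(tracks, labels):
    """tracks: list of (n_frames x d) arrays; labels: character names mined from the transcript."""
    X = np.stack([track_feature(t) for t in tracks])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```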

    Sparse in space and time: audio-visual synchronisation with trainable selectors

    The objective of this paper is audio-visual synchronisation of general videos ‘in the wild’. For such videos, the events that may be harnessed for synchronisation cues may be spatially small and may occur only infrequently during a many-seconds-long video clip, i.e. the synchronisation signal is ‘sparse in space and time’. This contrasts with the case of synchronising videos of talking heads, where audio-visual correspondence is dense in both time and space. We make four contributions: (i) in order to handle longer temporal sequences required for sparse synchronisation signals, we design a multi-modal transformer model that employs ‘selectors’ to distil the long audio and visual streams into small sequences that are then used to predict the temporal offset between streams; (ii) we identify artefacts that can arise from the compression codecs used for audio and video and can be used by audio-visual models in training to artificially solve the synchronisation task; (iii) we curate a dataset with only sparse in time and space synchronisation signals; and (iv) the effectiveness of the proposed model is shown on both dense and sparse datasets quantitatively and qualitatively. Project page: v-iashin.github.io/SparseSync
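
    To make the ‘selector’ idea above concrete, here is a minimal, hedged sketch (not the released SparseSync code): a small set of learnable query vectors cross-attends to a long feature stream and distils it into a short, fixed-length sequence; two such selectors (audio and visual) could feed a head that predicts the temporal offset. Dimensions and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Selector(nn.Module):
    """Distils a long stream (B, T_long, dim) into (B, num_queries, dim) via cross-attention."""
    def __init__(self, dim=512, num_queries=16, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, stream):
        # broadcast the learnable queries over the batch, then attend to the long stream
        q = self.queries.unsqueeze(0).expand(stream.size(0), -1, -1)
        distilled, _ = self.attn(q, stream, stream)
        return distilled
```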

    Automated video face labelling for films and TV material

    The objective of this work is automatic labelling of characters in TV video and movies, given weak supervisory information provided by an aligned transcript. We make five contributions: (i) a new strategy for obtaining stronger supervisory information from aligned transcripts; (ii) an explicit model for classifying background characters, based on their face-tracks; (iii) employing new ConvNet-based face features; and (iv) a novel approach for labelling all face tracks jointly using linear programming. Each of these contributions delivers a boost in performance, and we demonstrate this on standard benchmarks using tracks provided by authors of prior work. As a fifth contribution, we also investigate the generalisation and strength of the features and classifiers by applying them "in the raw" on new video material where no supervisory information is used. In particular, to provide high-quality tracks on this material, we propose efficient track classifiers that remove false-positive tracks produced by the face tracker. Overall we achieve a dramatic improvement over the state of the art on both TV series and film datasets, and almost saturate performance on some benchmarks.
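
    The joint-labelling step described above can be cast as a linear programme over track-label assignment variables. The sketch below shows only that bare skeleton under our own simplifying assumptions (the paper's actual formulation and constraints are richer), using classifier scores as the objective.

```python
import numpy as np
from scipy.optimize import linprog

def label_tracks_lp(scores):
    """scores: (n_tracks, n_labels) classifier confidences; returns one label index per track."""
    n_t, n_l = scores.shape
    c = -scores.ravel()                      # maximise total score == minimise its negative
    A_eq = np.zeros((n_t, n_t * n_l))        # each track's indicator variables must sum to one
    for t in range(n_t):
        A_eq[t, t * n_l:(t + 1) * n_l] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=np.ones(n_t), bounds=(0, 1), method="highs")
    return res.x.reshape(n_t, n_l).argmax(axis=1)
```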

    OVE6D: Object viewpoint encoding for depth-based 6D object pose estimation

    This paper proposes a universal framework, called OVE6D, for model-based 6D object pose estimation from a single depth image and a target object mask. Our model is trained using purely synthetic data rendered from ShapeNet, and, unlike most of the existing methods, it generalizes well to new real-world objects without any fine-tuning. We achieve this by decomposing the 6D pose into viewpoint, in-plane rotation around the camera optical axis and translation, and introducing novel lightweight modules for estimating each component in a cascaded manner. The resulting network contains fewer than 4M parameters while demonstrating excellent performance on the challenging T-LESS and Occluded LINEMOD datasets without any dataset-specific training. We show that OVE6D outperforms some contemporary deep learning-based pose estimation methods specifically trained for individual objects or datasets with real-world training data. The implementation is available at https://github.com/dingdingcai/OVE6D-pose
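
    The pose decomposition summarised above can be written down directly: the full rotation is an in-plane rotation about the camera optical axis composed with a viewpoint rotation, plus a translation. The numpy sketch below mirrors that factorisation for illustration only; it is not the OVE6D network code, and the function names are ours.

```python
import numpy as np

def inplane_rotation(theta):
    """Rotation about the camera optical (z) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def compose_pose(R_viewpoint, theta, t):
    """Assemble a 4x4 pose: the in-plane rotation is applied after the viewpoint rotation, then translation t."""
    T = np.eye(4)
    T[:3, :3] = inplane_rotation(theta) @ R_viewpoint
    T[:3, 3] = np.asarray(t, dtype=float)
    return T
```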